
    Estimating Population Abundance Using Sightability Models: R SightabilityModel Package

    Sightability models are binary logistic-regression models used to estimate and adjust for visibility bias in wildlife-population surveys (Steinhorst and Samuel 1989). Estimation proceeds in 2 stages: (1) sightability trials are conducted with marked individuals, and logistic regression is used to estimate the probability of detection as a function of available covariates (e.g., visual obstruction, group size); (2) the fitted model is used to adjust counts (from future surveys) for animals that were not observed. A modified Horvitz-Thompson estimator is used to estimate abundance: counts of observed animal groups are divided by their inclusion probabilities (determined by plot-level sampling probabilities and the detection probabilities estimated in stage 1). We provide a brief historical account of the approach, clarifying and documenting suggested modifications to the variance estimators originally proposed by Steinhorst and Samuel (1989). We then introduce a new R package, SightabilityModel, for estimating abundance using this technique. Lastly, we illustrate the software with a series of examples using data collected from moose (Alces alces) in northeastern Minnesota and mountain goats (Oreamnos americanus) in Washington State.
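
    A minimal base-R sketch of the two-stage point estimate may help make the approach concrete; the data, covariate names, and sampling probability below are hypothetical, and the SightabilityModel package additionally implements the variance estimators discussed in the paper.

        ## Stage 1: logistic regression on sightability trials with marked animals
        set.seed(1)
        trials <- data.frame(seen     = rbinom(100, 1, 0.7),   # 1 = marked group detected
                             voc      = runif(100, 0, 100),    # visual obstruction cover (%)
                             grp.size = rpois(100, 3) + 1)     # animals in the group
        fit <- glm(seen ~ voc + grp.size, family = binomial, data = trials)

        ## Stage 2: modified Horvitz-Thompson estimator applied to a new survey
        survey <- data.frame(voc = runif(40, 0, 100), grp.size = rpois(40, 3) + 1,
                             count = rpois(40, 4) + 1)         # observed group sizes
        p.detect <- predict(fit, newdata = survey, type = "response")
        p.sample <- 0.5                     # assumed plot-level sampling probability
        N.hat <- sum(survey$count / (p.sample * p.detect))     # abundance estimate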

    Used-habitat calibration plots: a new procedure for validating species distribution, resource selection, and step-selection models

    “Species distribution modeling” was recently ranked as one of the top five “research fronts” in ecology and the environmental sciences by ISI's Essential Science Indicators (Renner and Warton 2013), reflecting the importance of predicting how species distributions will respond to anthropogenic change. Unfortunately, species distribution models (SDMs) often perform poorly when applied to novel environments. Compounding this problem is the shortage of methods for evaluating SDMs (hence, we may be getting our predictions wrong and not even know it). Traditional methods for validating SDMs quantify a model's ability to classify locations as used or unused. Instead, we propose to focus on how well SDMs can predict the characteristics of used locations. This subtle shift in viewpoint leads to a more natural and informative evaluation and validation of models across the entire spectrum of SDMs. Through a series of examples, we show how simple graphical methods can help with three fundamental challenges of habitat modeling: identifying missing covariates, non-linearity, and multicollinearity. Identifying habitat characteristics that are not well predicted by the model can provide insights into variables affecting the distribution of species, suggest appropriate model modifications, and ultimately improve the reliability and generality of conservation and management recommendations.
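
    The idea lends itself to a short simulation; the sketch below (not the authors' code, with hypothetical variable names) compares the covariate distribution at used locations against the distribution implied by a fitted model's predictions.

        set.seed(1)
        avail <- data.frame(elev = rnorm(5000))            # available locations
        used  <- avail[runif(5000) < plogis(-1 + 0.8 * avail$elev), , drop = FALSE]

        dat <- rbind(cbind(avail, y = 0), cbind(used, y = 1))
        sdm <- glm(y ~ elev, family = binomial, data = dat)

        ## Model-implied used-habitat distribution: resample available points
        ## with weights proportional to predicted relative use
        w <- predict(sdm, newdata = avail, type = "response")
        pred.used <- sample(avail$elev, 2000, replace = TRUE, prob = w)

        plot(density(used$elev), main = "Used-habitat calibration: elevation")
        lines(density(pred.used), lty = 2)   # dashed curve should track the solid one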

    Comparing Stratification Schemes for Aerial Moose Surveys

    Stratification is generally used to improve the precision of aerial surveys. In Minnesota, moose (Alces alces) survey strata have been constructed using expert opinion informed by moose density from previous surveys (if available), recent disturbance, and cover-type information. Stratum-specific distributions of observed moose from plots surveyed during 2005-2010 overlapped, suggesting some improvement in precision might be accomplished by using a different stratification scheme. Therefore, we explored the feasibility of using remote-sensing data to define strata. Stratum boundaries were formed using a 2-step process: 1) we fit parametric and non-parametric regression models using land-cover data as predictors of observed moose numbers; and 2) we formed strata by applying classical rules for determining stratum boundaries to the model-based predictions. Although land-cover data and moose numbers were correlated, we were unable to improve upon the current stratification scheme based on expert opinion.
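
    Step 2 can be illustrated with the classical cumulative-sqrt(f) (Dalenius-Hodges) rule; the sketch below uses simulated predictions rather than the survey data.

        set.seed(1)
        pred <- rgamma(200, shape = 2, rate = 0.5)   # model-predicted moose per plot
        h    <- hist(pred, breaks = 30, plot = FALSE)
        csf  <- cumsum(sqrt(h$counts))               # cumulative sqrt(frequency)
        cuts <- csf[length(csf)] * (1:2) / 3         # equal slices for 3 strata
        bounds  <- h$breaks[-1][sapply(cuts, function(x) which.min(abs(csf - x)))]
        stratum <- cut(pred, c(-Inf, bounds, Inf), labels = c("low", "med", "high"))
        table(stratum)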

    Best practices and software for the management and sharing of camera trap data for small- and large-scale studies

    Camera traps typically generate large amounts of bycatch data of non-target species that are secondary to the study’s objectives. Bycatch data pooled from multiple studies can answer secondary research questions; however, variation in field and data management techniques creates problems when pooling data from multiple sources. Multi-collaborator projects that use standardized methods to answer broad-scale research questions are rare and limited in geographical scope. Many small, fixed-term independent camera trap studies operate in poorly represented regions, often using field and data management methods tailored to their own objectives. Inconsistent data management practices lead to loss of bycatch data or an inability to share it easily. As a case study to illustrate common problems that limit use of bycatch data, we discuss our experiences processing bycatch data obtained by multiple research groups during a range-wide assessment of sun bears Helarctos malayanus in Southeast Asia. We found that the most significant barrier to using bycatch data for secondary research was the time required, by the owners of the data and by the secondary researchers (us), to retrieve, interpret and process data into a form suitable for secondary analyses. Furthermore, large quantities of data were lost due to incompleteness and ambiguities in data entry. From our experiences, and from a review of the published literature and online resources, we generated nine recommendations on data management best practices for field site metadata, camera trap deployment metadata, image classification data and derived data products. We cover simple techniques that can be employed without training, special software and Internet access, as well as options for more advanced users, including a review of data management software and platforms. From the range of solutions provided here, researchers can employ those that best suit their needs and capacity. Doing so will enhance the usefulness of their camera trap bycatch data by improving the ease of data sharing, enabling collaborations and expanding the scope of research.
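
    As a minimal illustration of the kind of standardization the recommendations aim at (column names here are hypothetical, not a prescribed schema), a deployment-metadata table recorded in a consistent tidy form makes survey effort directly computable when pooling studies:

        deployments <- data.frame(
          deployment_id = c("D001", "D002"),
          site_id       = c("S01", "S01"),
          latitude      = c(4.123, 4.130),
          longitude     = c(101.550, 101.600),
          start_date    = as.Date(c("2015-03-01", "2015-03-02")),
          end_date      = as.Date(c("2015-05-30", "2015-06-02")),
          camera_model  = c("ModelX", "ModelX"))
        ## Effort (camera-days) follows directly, study after study:
        deployments$effort_days <-
          as.numeric(deployments$end_date - deployments$start_date)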

    Association of Hyponatremia on Mortality in Cryptococcal Meningitis: A Prospective Cohort.

    BACKGROUND: Sodium abnormalities are frequent in CNS infections and may be caused by cerebral salt wasting, syndrome of inappropriate antidiuretic hormone secretion (SIADH), or medication adverse events. In cryptococcal meningitis, the prevalence of baseline hyponatremia and whether hyponatremia adversely impacts survival are unknown. METHODS: We conducted a secondary analysis of data from two randomized trials of HIV-infected adult Ugandans with cryptococcal meningitis. We grouped serum sodium into 3 categories: <125, 125-129, and 130-145 mmol/L. We assessed whether baseline sodium abnormalities were associated with clinical characteristics and survival. RESULTS: Of 816 participants with cryptococcal meningitis, 741 (91%) had a baseline sodium measurement available: 121 (16%) had Grade 3-4 hyponatremia (<125 mmol/L), 194 (26%) had Grade 2 hyponatremia (125-129 mmol/L), and 426 (57%) had a baseline sodium of 130-145 mmol/L. Hyponatremia (<125 mmol/L) was associated with a higher initial CSF quantitative culture burden (P < .001), higher initial CSF opening pressure (P < .01), lower baseline Glasgow Coma Score (P < .01), and a higher percentage of baseline seizures (P = .03). Serum sodium <125 mmol/L was associated with increased 2-week mortality in unadjusted and adjusted survival analyses, with an adjusted hazard ratio of 1.87 (95% CI, 1.26 to 2.79; P < .01) compared to those with sodium of 130-145 mmol/L. CONCLUSIONS: Hyponatremia is common in cryptococcal meningitis and is associated with excess mortality. A standardized management approach to correctly diagnose and correct hyponatremia in cryptococcal meningitis needs to be developed and tested.
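
    A hedged sketch of the kind of survival analysis described (simulated data, not the trial data; using the survival package):

        library(survival)
        set.seed(1)
        n <- 300
        sodium <- factor(sample(c("<125", "125-129", "130-145"), n, replace = TRUE,
                                prob = c(0.16, 0.26, 0.58)),
                         levels = c("130-145", "125-129", "<125"))  # reference = 130-145
        time   <- rexp(n, rate = ifelse(sodium == "<125", 0.08, 0.04))
        status <- as.integer(time <= 14)        # death within 2 weeks
        time   <- pmin(time, 14)                # administrative censoring at day 14
        summary(coxph(Surv(time, status) ~ sodium))  # hazard ratios vs. 130-145 mmol/L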

    Data from: A hidden Markov model to identify and adjust for selection bias: an example involving mixed migration strategies

    An important assumption in observational studies is that sampled individuals are representative of some larger study population. Yet, this assumption is often unrealistic. Notable examples include online public-opinion polls, publication biases associated with statistically significant results, and in ecology, telemetry studies with significant habitat-induced probabilities of missed locations. This problem can be overcome by modeling selection probabilities simultaneously with other predictor–response relationships or by weighting observations by inverse selection probabilities. We illustrate the problem and a solution when modeling mixed migration strategies of northern white-tailed deer (Odocoileus virginianus). Captures occur on winter yards where deer migrate in response to changing environmental conditions. Yet, not all deer migrate in all years, and captures during mild years are more likely to target deer that migrate every year (i.e., obligate migrators). Characterizing deer as conditional or obligate migrators is also challenging unless deer are observed for many years and under a variety of winter conditions. We developed a hidden Markov model where the probability of capture depends on each individual's migration strategy (conditional versus obligate migrator), a partially latent variable that depends on winter severity in the year of capture. In a 15-year study involving 168 white-tailed deer, the estimated probability of migrating for conditional migrators increased nonlinearly with an index of winter severity. We estimated a higher proportion of obligates in the study cohort than in the population, except during a span of 3 years surrounding back-to-back severe winters. These results support the hypothesis that selection biases occur as a result of capturing deer on winter yards, with the magnitude of bias depending on the severity of winter weather. Hidden Markov models offer an attractive framework for addressing selection biases due to their ability to incorporate latent variables and model direct and indirect links between state variables and capture probabilities.
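
    The selection-bias mechanism can be seen in a toy calculation (all numbers hypothetical, not estimates from the paper): if capture requires being on a winter yard, Bayes' rule gives the probability that a captured deer is an obligate migrator as a function of winter severity.

        p.obligate <- 0.3                                   # population share of obligates
        p.mig.cond <- function(sev) plogis(-1 + 1.5 * sev)  # conditional migrators

        p.obl.given.capture <- function(sev) {
          num <- p.obligate * 1                 # obligates migrate every year
          num / (num + (1 - p.obligate) * p.mig.cond(sev))
        }
        sev <- seq(-2, 2, length.out = 50)      # standardized winter severity
        plot(sev, p.obl.given.capture(sev), type = "l",
             xlab = "winter severity", ylab = "P(obligate | captured)")
        ## Mild winters inflate the share of obligates among captured deer.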

    Sandhill Crane Nest Habitat Selection and Factors Affecting Nest Success in Northwestern Minnesota

    We studied 62 greater sandhill crane (Grus canadensis tabida) nests in northwestern Minnesota during 1989-1991 to document nest habitat use and selection, nest success, and factors associated with nest success. We recorded 15 habitat variables at each nest and at a randomly selected site in the same wetland. Nests were in basins of 0.01-601 ha (median = 2.2 ha) and at water depths of 0-35.7 cm (median = 9.7 cm). Cattail (Typha sp.) was the dominant vegetation at 58.0% of nests, while 21.0% were at sites dominated by phragmites (Phragmites australis). Conditional logistic regression models indicated that locations with lower concealment indices, lower log sedge (Carex sp.) stem counts, and higher log phragmites stem counts were more likely to be associated with nest sites. Estimated nest success was 56% (apparent), 40% (Mayfield), and 47% (logistic-exposure model). Most nest failures appeared to be due to mammalian predation. Nest depredation appeared to increase with later nest initiation dates, but after accounting for differences in exposure times, this difference was no longer evident. Year had the strongest effect on nest success, with the lowest success recorded in 1990, a dry spring. Logistic-exposure models suggested that nest success tended to increase with increasing water depth at the nest site or with decreasing concealment indices.
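
    A sketch of the matched-pairs design (hypothetical data and variable names; clogit from the survival package): each nest is paired with a random site in the same wetland and analyzed with conditional logistic regression.

        library(survival)
        set.seed(1)
        n.pairs <- 62
        dat <- data.frame(pair      = rep(1:n.pairs, each = 2),
                          nest      = rep(c(1, 0), n.pairs),  # 1 = nest, 0 = random site
                          conceal   = rnorm(2 * n.pairs),     # concealment index
                          log.sedge = rnorm(2 * n.pairs),     # log sedge stem count
                          log.phrag = rnorm(2 * n.pairs))     # log phragmites stem count
        summary(clogit(nest ~ conceal + log.sedge + log.phrag + strata(pair),
                       data = dat))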

    Growth Rates and Variances of Unexploited Wolf Populations in Dynamic Equilibria: Data, R Code, and Supporting Results

    This dataset contains four files. PopulationModels.R is an R script defining functions used to fit density-independent and Ricker population models to the associated time series data. With these functions, population measurements can be modeled under three different measurement assumptions: i) measured without error; ii) measured with Poisson error; or iii) measured with log-normal error. MechFieberg.R is an R script that will run all analyses supporting the findings in Mech and Fieberg (2014). MechFieberg.html is a summary of the output expected when running the MechFieberg.R script. Wolfdat.csv is the raw data file containing the wolf population measurements. The four columns in this file correspond to the year of measurement (YR) and the location of measurement: Denali National Park (Denali), Isle Royale (IsleRoyale), and Superior National Forest (SNF). These files contain data and R code (along with associated output from running the code) supporting all results reported in: Mech, D. and J. Fieberg. 2014. Growth Rates and Variances of Unexploited Wolf Populations in Dynamic Equilibria. Wildlife Society Bulletin. In Mech and Fieberg (2014), we analyzed natural, long-term, wolf-population-density trajectories totaling 130 years of data from three areas: Isle Royale National Park in Lake Superior, Michigan; the east-central Superior National Forest in northeastern Minnesota; and Denali National Park, Alaska. We fit density-independent and Ricker models to each time series, allowing for 3 different assumptions regarding observation error (no error, Poisson error, or log-normal error). We suggest estimates of the population-dynamic parameters can serve as benchmarks for comparison with those calculated from other wolf populations repopulating other areas.
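
    A minimal sketch of the simplest case handled by this kind of analysis (counts assumed measured without error; hypothetical counts, not the wolf data, and not the repository's PopulationModels.R code):

        N     <- c(25, 31, 40, 38, 45, 52, 48, 55, 60, 58)  # hypothetical counts
        r.obs <- diff(log(N))                     # realized growth rates log(N[t+1]/N[t])
        ricker <- lm(r.obs ~ N[-length(N)])       # log(N[t+1]/N[t]) = a + b*N[t] + error
        di     <- lm(r.obs ~ 1)                   # density-independent model (b = 0)
        coef(ricker)                              # b < 0 suggests density dependence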

    A fresh look at an old concept: home-range estimation in a tidy world

    A rich set of statistical techniques has been developed over the last several decades to estimate the spatial extent of animal home ranges from telemetry data, and new methods to estimate home ranges continue to be developed. Here we investigate home-range estimation from a computational point of view and aim to provide a general framework for computing home ranges, independent of specific estimators. We show how such a workflow can help to make home-range estimation easier and more intuitive, and we provide a series of examples illustrating how different estimators can be compared easily. This allows one to perform a sensitivity analysis to determine the degree to which the choice of estimator influences qualitative and quantitative conclusions. By providing a standardized implementation of home-range estimators, we hope to equip researchers with the tools needed to explore how estimator choice influences answers to biologically meaningful questions.
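
    The estimator-agnostic workflow can be mimicked in base R (hypothetical relocations; the paper's implementation is more general): compute two home-range estimates from the same data and compare their areas.

        set.seed(1)
        xy <- cbind(x = rnorm(200), y = rnorm(200))          # relocations

        ## Estimator 1: 100% minimum convex polygon, area by the shoelace formula
        hull <- xy[chull(xy), ]
        mcp.area <- abs(sum(hull[, 1] * c(hull[-1, 2], hull[1, 2]) -
                            c(hull[-1, 1], hull[1, 1]) * hull[, 2])) / 2

        ## Estimator 2: area inside the 95% kernel density isopleth (grid approx.)
        kde  <- MASS::kde2d(xy[, 1], xy[, 2], n = 100)
        cell <- diff(kde$x[1:2]) * diff(kde$y[1:2])
        z    <- sort(kde$z, decreasing = TRUE)
        cutoff   <- z[which(cumsum(z * cell) >= 0.95)[1]]
        kde.area <- sum(kde$z >= cutoff) * cell
        c(mcp = mcp.area, kde95 = kde.area)      # how much does the choice matter?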

    R Code and Output Supporting "Accounting for individual-specific variation in habitat-selection studies: Efficient estimation of mixed-effects models using Bayesian or frequentist computation"

    See readme.txt for a description of the files in this repository. This repository contains data and R code (along with associated output from running the code) for fitting resource-selection functions and step-selection functions with random effects, supporting all results reported in: Muff, S., Signer, J. and Fieberg, J., 2018. Accounting for individual-specific variation in habitat-selection studies: Efficient estimation of mixed-effects models using Bayesian or frequentist computation. bioRxiv, p. 411801.
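
    One commonly used frequentist shortcut for individual-specific selection is weighted logistic regression with random slopes; the sketch below (simulated data; not the repository's code, which also covers step-selection functions and a Poisson reformulation) uses lme4.

        library(lme4)
        set.seed(1)
        dat <- data.frame(id    = factor(rep(1:10, each = 200)),
                          used  = rep(c(1, 0), times = 10 * 100),  # used vs. available
                          cover = rnorm(2000))
        dat$w <- ifelse(dat$used == 1, 1, 1000)  # heavy weights on available points
        rsf <- glmer(used ~ cover + (0 + cover | id), data = dat,
                     family = binomial, weights = w)
        summary(rsf)  # population-level selection plus among-individual variation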